The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit organization founded in 2000 to research safety issues related to the development of strong AI. Nate Soares is the current Executive Director, having taken over from Luke Muehlhauser in May 2015. MIRI's technical agenda states that new formal tools are needed to ensure the safe operation of future generations of AI software (friendly artificial intelligence). The organization hosts regular research workshops to develop mathematical foundations for this project, and has been cited as one of several academic and nonprofit groups studying long-term AI outcomes.

==History==
In 2000, AI theorist Eliezer Yudkowsky and Internet entrepreneurs Brian and Sabine Atkins founded the Singularity Institute for Artificial Intelligence to "help humanity prepare for the moment when machine intelligence exceeded human intelligence". In early 2005, SIAI relocated from Atlanta, Georgia, to Silicon Valley.

From 2006 to 2012, the Institute collaborated with Singularity University to produce the Singularity Summit, a science and technology conference. Speakers included Steven Pinker, Peter Norvig, Stephen Wolfram, John Tooby, James Randi, and Douglas Hofstadter.

In mid-2012, the Institute spun off a new organization, the Center for Applied Rationality, which focuses on using ideas from cognitive science to improve people's effectiveness in their daily lives. Having previously shortened its name to "Singularity Institute", in January 2013 SIAI renamed itself the "Machine Intelligence Research Institute" to avoid confusion with Singularity University. MIRI gave control of the Singularity Summit to Singularity University and shifted its focus toward research in mathematics and theoretical computer science.

In mid-2014, Nick Bostrom's book ''Superintelligence: Paths, Dangers, Strategies'' helped spark public discussion about AI's long-run social impact, and it received endorsements from Bill Gates and Elon Musk. Stephen Hawking and AI pioneer Stuart Russell co-authored a ''Huffington Post'' article citing the work of MIRI and other organizations in the area:

:Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all. (...) Although we are facing potentially the best or worst thing ever to happen to humanity, little serious research is devoted to these issues outside small non-profit institutes such as the Cambridge Center for Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute.

In early 2015, MIRI's research was cited in a research priorities document accompanying an open letter on AI that called for "expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial". Musk responded by funding a large AI safety grant program, with grant recipients including Bostrom, Russell, Bart Selman, Francesca Rossi, Thomas Dietterich, Manuela M. Veloso, and researchers at MIRI.